
    Three Essays on the Economics of Philanthropy

    This dissertation studies the economics of prosocial behavior. In three chapters, I investigate experimentally how prosocial incentives, pledges, and altruistic self-concept affect individuals' prosocial behavior. Chapter 1 studies the role of self-chosen goals in shaping how prosocial incentives motivate individual performance. Prosocial incentives tie workers' pay to an additional reward donated to charity. Firms have widely adopted them because they help build corporate culture and boost employees' morale, performance, and job satisfaction. However, recent studies show that a larger reward does not necessarily strengthen the motivating effect of prosocial incentives. To address this limitation, I pair the incentives with a self-chosen goal scheme. I design an online experiment in which participants set goals for themselves before engaging in real-effort tasks; they earn the prosocial reward only if they reach their goals. My results show that workers who receive prosocial incentives improve their performance by setting higher goals and achieving them. Moreover, when offered the opportunity to earn large rewards, workers who identify with the charity's mission set higher goals, motivating themselves to exert additional effort. My findings suggest that, within a self-chosen goal scheme, prosocial incentives are comparable to monetary incentives in motivating workers; the preferred type of incentive depends on the firm's objectives and on worker heterogeneity. Chapter 2 investigates experimentally whether allowing people to pledge when they will volunteer increases volunteering. As previous literature shows, the effect of pledging on volunteering is ambiguous. On one hand, pledges can boost volunteering by letting volunteers choose when to help others.
On the other hand, pledges give individuals more ways to excuse themselves from volunteering. In this paper, we study how volunteering decisions are affected by pledges using an online experiment. We find that pledges increase reneging on promises to volunteer, but total effort donations do not change. We also develop a simple model that helps explain how the relevant parameters affect behavior in our experiment. In particular, the model predicts that, when given the opportunity to pledge, people with high altruism or high warm-glow prefer to volunteer sooner rather than later, while higher expected future participation costs and lower expected reneging costs reduce the rate of immediate rejection. Moreover, pledges increase reneging at the future date: those who want to volunteer do not delay their volunteering, whereas those for whom saying no is costly postpone their rejection and renege at the future date. Chapter 3 digs deeper into the effect of personality traits on the willingness to make and keep a promise to volunteer. In our experiment, Amazon Mechanical Turk participants are given the option to volunteer by donating time and effort to a charity. They also answer a series of questionnaires, including the Big Five personality test and attitudinal questions that we use to construct an index of altruistic self-concept. Self-concept refers to the way we describe and evaluate ourselves. We find that altruistic self-concept mediates how personality affects volunteering decisions. In particular, agreeableness strongly influences the probability of making and keeping promises to volunteer through its effect on altruistic self-concept. Our findings have useful implications for non-profit organizations.
Agreeable individuals who evaluate and describe themselves as altruistic can be more helpful and dependable, so organizations can look for ways to strengthen altruistic self-concept and thereby positively influence prosociality in the workplace.
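The pledge model sketched in Chapter 2 can be illustrated with a minimal decision rule: an agent compares volunteering now, pledging (and deciding later between volunteering and reneging), and rejecting immediately. The parameter names below (altruism, warm_glow, cost_now, exp_cost_future, renege_cost, refusal_cost) are illustrative assumptions, not the paper's notation.

```python
def pledge_decision(altruism, warm_glow, cost_now, exp_cost_future,
                    renege_cost, refusal_cost):
    """Return the action that maximizes expected utility (a toy sketch)."""
    benefit = altruism + warm_glow          # utility from helping
    u_volunteer_now = benefit - cost_now
    # Pledging defers the choice: later, take whichever is better,
    # volunteering at the expected future cost or reneging at its cost.
    u_pledge = max(benefit - exp_cost_future, -renege_cost)
    u_reject_now = -refusal_cost            # social cost of saying no outright
    options = {"volunteer_now": u_volunteer_now,
               "pledge": u_pledge,
               "reject_now": u_reject_now}
    return max(options, key=options.get)
```

Consistent with the model's predictions, high altruism or warm-glow makes volunteering now optimal, while low reneging costs combined with high refusal costs push agents to pledge and later renege.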

    Teaching Categories to Human Learners with Visual Explanations

    We study the problem of computer-assisted teaching with explanations. Conventional approaches for machine teaching typically only provide feedback at the instance level (e.g., the category or label of the instance). However, it is intuitive that clear explanations from a knowledgeable teacher can significantly improve a student's ability to learn a new concept. To address these existing limitations, we propose a teaching framework that provides interpretable explanations as feedback and models how the learner incorporates this additional information. In the case of images, we show that we can automatically generate explanations that highlight the parts of the image that are responsible for the class label. Experiments on human learners illustrate that, on average, participants achieve better test set performance on challenging categorization tasks when taught with our interpretable approach compared to existing methods.
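One standard way to automatically highlight the image parts responsible for a class label is occlusion-based saliency: mask each patch and measure how much the true-class score drops. This is a generic sketch, not necessarily the paper's own explanation method; the toy classifier below is an assumption for illustration.

```python
import numpy as np

def occlusion_saliency(image, classify, true_class, patch=4):
    """Score each patch by how much masking it lowers the true-class score."""
    h, w = image.shape
    base = classify(image)[true_class]
    sal = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0   # occlude one patch
            sal[i // patch, j // patch] = base - classify(masked)[true_class]
    return sal

def toy_classify(img):
    # Hypothetical classifier: class 0 responds to the top-left quadrant,
    # class 1 to the bottom-right quadrant.
    return np.array([img[:4, :4].mean(), img[4:, 4:].mean()])

img = np.zeros((8, 8))
img[:4, :4] = 1.0                      # evidence for class 0 sits top-left
sal = occlusion_saliency(img, toy_classify, true_class=0)
```

The saliency map correctly assigns all importance to the top-left patch, which is exactly the region a teacher would highlight as the explanation for class 0.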

    Interpretable Machine Teaching via Feature Feedback

    A student’s ability to learn a new concept can be greatly improved by providing them with clear, easy-to-understand explanations from a knowledgeable teacher. However, many existing approaches for machine teaching give only a limited amount of feedback to the student. For example, in the case of learning visual categories, this feedback could be the class label of the object present in the image. Instead, we propose a teaching framework that provides both instance-level labels and explanations in the form of feature-level feedback to the human learners. For image categorization, our feature-level feedback consists of a highlighted part or region in an image that explains the class label. We perform experiments on real human participants and show that learners who are taught with feature-level feedback perform better at test time compared to existing methods.

    Near-Optimal Machine Teaching via Explanatory Teaching Sets

    Modern applications of machine teaching for humans often involve domain-specific, non-trivial target hypothesis classes. To facilitate understanding of the target hypothesis, it is crucial for the teaching algorithm to use examples which are interpretable to the human learner. In this paper, we propose NOTES, a principled framework for constructing interpretable teaching sets, utilizing explanations to accelerate the teaching process. Our algorithm is built upon a natural stochastic model of learners and a novel submodular surrogate objective function which greedily selects interpretable teaching examples. We prove that NOTES is competitive with the optimal explanation-based teaching strategy. We further instantiate NOTES with a specific hypothesis class, which can be viewed as an interpretable approximation of any hypothesis class, allowing us to handle complex hypotheses in practice. We demonstrate the effectiveness of NOTES on several image classification tasks, for both simulated and real human learners. Our experimental results suggest that by leveraging explanations, one can significantly speed up teaching.
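The greedy selection over a submodular surrogate objective can be sketched generically: repeatedly pick the candidate example with the largest marginal gain. For monotone submodular objectives, this greedy rule is the classic (1 - 1/e)-approximation. The coverage objective below, where each example "eliminates" a set of wrong hypotheses, is an illustrative stand-in for the paper's actual surrogate.

```python
def greedy_teaching_set(candidates, utility, budget):
    """Greedily build a teaching set maximizing a submodular utility."""
    chosen = []
    for _ in range(budget):
        best, best_gain = None, 0.0
        for x in candidates:
            if x in chosen:
                continue
            gain = utility(chosen + [x]) - utility(chosen)  # marginal gain
            if gain > best_gain:
                best, best_gain = x, gain
        if best is None:        # no example adds value; stop early
            break
        chosen.append(best)
    return chosen

# Hypothetical example: each teaching example eliminates some hypotheses.
elim = {"a": {1, 2}, "b": {2, 3}, "c": {4}}

def coverage(selected):
    return len(set().union(*(elim[x] for x in selected))) if selected else 0

chosen = greedy_teaching_set(list(elim), coverage, budget=2)
```

With a budget of two, the greedy rule picks "a" (eliminating hypotheses 1 and 2) and then "b" (adding hypothesis 3), covering three of the four hypotheses.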

    Maat: Performance Metric Anomaly Anticipation for Cloud Services with Conditional Diffusion

    Ensuring the reliability and user satisfaction of cloud services necessitates prompt anomaly detection followed by diagnosis. Existing techniques for anomaly detection focus solely on real-time detection, meaning that anomaly alerts are issued as soon as anomalies occur. However, anomalies can propagate and escalate into failures, making faster-than-real-time anomaly detection highly desirable for expediting downstream analysis and intervention. This paper proposes Maat, the first work to address anomaly anticipation of performance metrics in cloud services. Maat adopts a novel two-stage paradigm for anomaly anticipation, consisting of metric forecasting and anomaly detection on the forecasts. The metric forecasting stage employs a conditional denoising diffusion model to enable multi-step forecasting in an auto-regressive manner. The detection stage extracts anomaly-indicating features based on domain knowledge and applies an isolation forest with incremental learning to detect upcoming anomalies. Thus, our method can uncover anomalies that better conform to human expertise. Evaluation on three publicly available datasets demonstrates that Maat can anticipate anomalies faster than real time, as or more effectively than state-of-the-art real-time anomaly detectors. We also present cases highlighting Maat's success in forecasting abnormal metrics and discovering anomalies. Comment: This paper has been accepted by the Research track of the 38th IEEE/ACM International Conference on Automated Software Engineering (ASE 2023).
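The two-stage "forecast, then detect on the forecasts" paradigm can be sketched with deliberately simple stand-ins: a naive AR(1) forecaster in place of Maat's conditional diffusion model, and a z-score rule in place of its isolation forest over domain-knowledge features. Both substitutions are assumptions for illustration only.

```python
import numpy as np

def forecast_ar1(series, steps):
    """Naive AR(1) multi-step forecaster (stand-in for the diffusion model)."""
    # Fit phi by least squares on consecutive pairs, then roll forward.
    phi = np.dot(series[1:], series[:-1]) / np.dot(series[:-1], series[:-1])
    out, last = [], series[-1]
    for _ in range(steps):
        last = phi * last
        out.append(last)
    return np.array(out)

def flag_anomalies(history, forecasts, k=3.0):
    """Flag forecast points deviating more than k sigma from history.

    A z-score stand-in for Maat's isolation-forest detection stage.
    """
    mu, sigma = history.mean(), history.std()
    return np.abs(forecasts - mu) > k * sigma
```

On a stable oscillating metric the forecasts stay in band and nothing is flagged; once a spike enters the history, the auto-regressive forecasts escalate and are flagged before the (simulated) failure materializes, which is the anticipation effect the paper targets.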

    EvLog: Evolving Log Analyzer for Anomalous Logs Identification

    Software logs record system activities, aiding maintainers in identifying the underlying causes of failures and enabling prompt mitigation actions. However, maintainers need to inspect a large volume of daily logs to identify the anomalous logs that reveal failure details for further diagnosis. Thus, how to automatically distinguish these anomalous logs from normal logs becomes a critical problem. Existing approaches alleviate the burden on software maintainers, but they are built upon an improper yet critical assumption: that logging statements in the software remain unchanged. While software keeps evolving, our empirical study finds that evolving software brings three challenges: log parsing errors, evolving log events, and unstable log sequences. In this paper, we propose a novel unsupervised approach named Evolving Log analyzer (EvLog) to mitigate these challenges. We first build a multi-level representation extractor to process logs without parsing, preventing errors from the parser. The multi-level representations preserve the essential semantics of logs while leaving out insignificant changes in evolving events. EvLog then implements an anomaly discriminator with an attention mechanism to identify anomalous logs and avoid the issue brought by unstable sequences. EvLog has shown effectiveness on two real-world system evolution log datasets, with average F1 scores of 0.955 and 0.847 in the intra-version and inter-version settings, respectively, outperforming other state-of-the-art approaches by a wide margin. To the best of our knowledge, this is the first study on tackling anomalous logs over software evolution. We believe our work sheds new light on the impact of software evolution with the corresponding solutions for the log analysis community.
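The core idea of a parsing-free representation can be illustrated with a deliberately crude stand-in: character-trigram vectors scored against known-normal lines by cosine similarity. EvLog itself uses learned multi-level semantic representations and an attention-based discriminator; everything below is an assumption for illustration, showing only why skipping the parser sidesteps parsing errors.

```python
import math
from collections import Counter

def trigram_vector(log_line):
    """Parsing-free representation: character trigram counts."""
    s = log_line.lower()
    return Counter(s[i:i + 3] for i in range(len(s) - 2))

def cosine(a, b):
    num = sum(a[k] * b[k] for k in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def anomaly_score(line, normal_lines):
    """Score = 1 - similarity to the closest known-normal log line."""
    v = trigram_vector(line)
    return 1.0 - max(cosine(v, trigram_vector(n)) for n in normal_lines)
```

Because the representation ignores small token-level variations (e.g., a changed host suffix across software versions), an evolved-but-normal line scores low, while a genuinely anomalous line scores high.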